2025-04-04 18:20:25,344 [ 815720 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-04-04 18:20:25,344 [ 815720 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths)
2025-04-04 18:20:25,344 [ 815720 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths)
2025-04-04 18:20:25,344 [ 815720 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_jwabdd --privileged --dns-search='.' --memory=30709035008 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=2 --color=no --durations=0 test_settings_randomization/test.py::test_settings_randomization test_storage_hdfs/test.py::test_hdfsCluster test_storage_hdfs/test.py::test_virtual_columns_2 -vvv" altinityinfra/integration-tests-runner:cd6390247eca '.
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: random-0.2, timeout-2.2.0, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
collecting ...
collected 3 items

test_settings_randomization/test.py::test_settings_randomization FAILED [ 33%]
test_storage_hdfs/test.py::test_hdfsCluster FAILED [ 66%]
test_storage_hdfs/test.py::test_virtual_columns_2 FAILED [100%]

=================================== FAILURES ===================================
_________________________ test_settings_randomization __________________________

started_cluster = 

    def test_settings_randomization(started_cluster):
        """
        See tests/integration/helpers/random_settings.py
        """

        def q(field, name):
            return int(
                node.query(
                    f"SELECT {field} FROM system.settings WHERE name = '{name}'"
                ).strip()
            )

        # setting set in test config is not overriden
        assert q("value", "max_block_size") == 59999

        # some setting is randomized
>       assert q("changed", "max_joined_block_size_rows") == 1
E       AssertionError: assert 0 == 1
E        +  where 0 = .q at 0x7f0e58e85fc0>('changed', 'max_joined_block_size_rows')

test_settings_randomization/test.py:35: AssertionError
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:122, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : No running containers (conftest.py:96, cleanup_environment)
2025-04-04 18:20:29 [ 632 ] DEBUG : Pruning Docker networks (conftest.py:98, cleanup_environment)
2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker network prune --force] (cluster.py:122, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:122, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:146, run_and_check)
2025-04-04 18:20:29 [ 632 ] INFO : Running tests in /ClickHouse/tests/integration/test_settings_randomization/test.py (cluster.py:2793, start)
2025-04-04 18:20:29 [ 632 ] DEBUG : Cluster start called. is_up=False (cluster.py:2800, start)
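To see which settings actually ended up non-default on the node when an assertion like the one above fails, a minimal standalone sketch can list everything marked `changed` in system.settings. This assumes a locally reachable ClickHouse server and the third-party clickhouse-driver package; neither is part of the test harness, which uses the `node.query` helper shown above.

    # Hypothetical debugging aid, not part of the test: dump every session setting
    # whose value differs from its default, which is what `changed` reflects.
    from clickhouse_driver import Client  # third-party package, assumed installed

    client = Client(host="localhost")  # assumption: server reachable on the native port
    rows = client.execute(
        "SELECT name, value FROM system.settings WHERE changed ORDER BY name"
    )
    for name, value in rows:
        print(f"{name} = {value}")

If the randomization helper had applied, max_joined_block_size_rows would be expected to show up in this list; in the failing run it does not.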
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker networks for project roottestsettingsrandomization are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker containers for project roottestsettingsrandomization are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker volumes for project roottestsettingsrandomization are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Cleanup called (cluster.py:894, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker networks for project roottestsettingsrandomization are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker containers for project roottestsettingsrandomization are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Docker volumes for project roottestsettingsrandomization are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker container list --all --filter name='^/roottestsettingsrandomization-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : No running containers for project: roottestsettingsrandomization (cluster.py:922, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check)
2025-04-04 18:20:29 [ 632 ] DEBUG : Images pruned (cluster.py:947, cleanup)
2025-04-04 18:20:29 [ 632 ] DEBUG : Trying to prune unused volumes...
(cluster.py:953, cleanup) 2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-04 18:20:29 [ 632 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-04 18:20:29 [ 632 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) 2025-04-04 18:20:29 [ 632 ] DEBUG : Setup directory for instance: node1 (cluster.py:2813, start) 2025-04-04 18:20:29 [ 632 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Copy custom test config files [] to /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/configs/config.d (cluster.py:4752, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/database (cluster.py:4769, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/logs (cluster.py:4780, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-04 18:20:29 [ 632 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env (cluster.py:97, _create_env_file) 2025-04-04 18:20:29 [ 632 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-04 18:20:29 [ 632 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-04 18:20:29 [ 632 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-04 18:20:29 [ 632 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-04 18:20:29 [ 632 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-04-04 18:20:29 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env --project-name roottestsettingsrandomization --file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/docker-compose.yml pull] (cluster.py:122, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: node1 Pulling (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: node1 Pulled (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env --project-name roottestsettingsrandomization --file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/docker-compose.yml up -d --no-recreate') (cluster.py:3200, start) 2025-04-04 18:20:40 [ 632 ] DEBUG : Command:[docker 
compose --env-file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env --project-name roottestsettingsrandomization --file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/docker-compose.yml up -d --no-recreate] (cluster.py:122, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Network roottestsettingsrandomization_default Creating (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Network roottestsettingsrandomization_default Created (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:20:40 [ 632 ] DEBUG : ClickHouse instance created (cluster.py:3208, start) 2025-04-04 18:20:40 [ 632 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2082, get_instance_ip) 2025-04-04 18:20:40 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestsettingsrandomization-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:40 [ 632 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2092, get_instance_global_ipv6) 2025-04-04 18:20:40 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestsettingsrandomization-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:40 [ 632 ] DEBUG : Waiting for ClickHouse start in node1, ip: 172.16.1.2... 
(cluster.py:3216, start) 2025-04-04 18:20:40 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestsettingsrandomization-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:40 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/451cc5669a8692e7b312fca61fb2b3f9a405b5a98d4f38ff649c0ef0a06e32b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:40 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/451cc5669a8692e7b312fca61fb2b3f9a405b5a98d4f38ff649c0ef0a06e32b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:41 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/451cc5669a8692e7b312fca61fb2b3f9a405b5a98d4f38ff649c0ef0a06e32b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:41 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/451cc5669a8692e7b312fca61fb2b3f9a405b5a98d4f38ff649c0ef0a06e32b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:41 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/451cc5669a8692e7b312fca61fb2b3f9a405b5a98d4f38ff649c0ef0a06e32b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:20:41 [ 632 ] DEBUG : ClickHouse node1 started (cluster.py:3220, start) ------------------------------ Captured log call ------------------------------- 2025-04-04 18:20:41 [ 632 ] DEBUG : Executing query SELECT value FROM system.settings WHERE name = 'max_block_size' on node1 (cluster.py:3677, query) 2025-04-04 18:20:41 [ 632 ] DEBUG : Executing query SELECT changed FROM system.settings WHERE name = 'max_joined_block_size_rows' on node1 (cluster.py:3677, query) ---------------------------- Captured log teardown ----------------------------- 2025-04-04 18:20:41 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env --project-name roottestsettingsrandomization --file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/docker-compose.yml stop --timeout 20] (cluster.py:122, run_and_check) 2025-04-04 18:20:44 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:20:44 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:20:44 [ 632 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-04 18:20:44 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/.env --project-name roottestsettingsrandomization --file /ClickHouse/tests/integration/test_settings_randomization/_instances-2/node1/docker-compose.yml down --volumes] (cluster.py:122, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Removing (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] 
DEBUG : Stderr: Container roottestsettingsrandomization-node1-1 Removed (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stderr: Network roottestsettingsrandomization_default Removing (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stderr: Network roottestsettingsrandomization_default Removed (cluster.py:148, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Docker networks for project roottestsettingsrandomization are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-04 18:20:45 [ 632 ] DEBUG : Docker containers for project roottestsettingsrandomization are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-04 18:20:45 [ 632 ] DEBUG : Docker volumes for project roottestsettingsrandomization are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker container list --all --filter name='^/roottestsettingsrandomization-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : No running containers for project: roottestsettingsrandomization (cluster.py:922, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:953, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check)
2025-04-04 18:20:45 [ 632 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check)
2025-04-04 18:20:45 [ 632 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup)

_______________________________ test_hdfsCluster _______________________________

started_cluster = 

    def test_hdfsCluster(started_cluster):
        hdfs_api = started_cluster.hdfs_api
        fs = HdfsClient(hosts=started_cluster.hdfs_ip)
        dir = "/test_hdfsCluster"
        exists = fs.exists(dir)
        if exists:
            fs.delete(dir, recursive=True)
        fs.mkdirs(dir)
        hdfs_api.write_data("/test_hdfsCluster/file1", "1\n")
        hdfs_api.write_data("/test_hdfsCluster/file2", "2\n")
        hdfs_api.write_data("/test_hdfsCluster/file3", "3\n")

        expected = "1\tfile1\ttest_hdfsCluster/file1\n2\tfile2\ttest_hdfsCluster/file2\n3\tfile3\ttest_hdfsCluster/file3\n"

        query_id_pure = str(uuid.uuid4())
        actual = node1.query(
            "select id, _file as file_name, _path as file_path from hdfs('hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id",
            query_id=query_id_pure,
        )
        assert actual == expected

        query_id_cluster = str(uuid.uuid4())
        actual = node1.query(
            "select id, _file as file_name, _path as file_path from hdfsCluster('test_cluster_two_shards', 'hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id",
            query_id=query_id_cluster,
        )
        assert actual == expected

        query_id_cluster_alt_syntax = str(uuid.uuid4())
        actual = node1.query(
            "select id, _file as file_name, _path as file_path from hdfs('hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id",
            settings={"object_storage_cluster": "test_cluster_two_shards"},
            query_id=query_id_cluster_alt_syntax,
        )
        assert actual == expected

        node1.query("SYSTEM FLUSH LOGS")

        queries_pure = node1.query(
            f"""
            SELECT count() FROM system.query_log
            WHERE type='QueryFinish' AND initial_query_id='{query_id_pure}'
            """
        )
        assert int(queries_pure) == 1

        queries_cluster = node1.query(
            f"""
            SELECT count() FROM system.query_log
            WHERE type='QueryFinish' AND initial_query_id='{query_id_cluster}'
            """
        )
>       assert int(queries_cluster) == 3
E       AssertionError: assert 1 == 3
E        +  where 1 = int('1\n')

test_storage_hdfs/test.py:597: AssertionError
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2025-04-04 18:20:45 [ 632 ] INFO : Running tests in /ClickHouse/tests/integration/test_storage_hdfs/test.py (cluster.py:2793, start)
2025-04-04 18:20:45 [ 632 ] DEBUG : Cluster start called. is_up=False (cluster.py:2800, start)
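The assertion that fails above counts QueryFinish rows in system.query_log that share the hdfsCluster() query's initial_query_id, presumably one for the initiating query plus one per shard of test_cluster_two_shards, hence the expected value of 3. A minimal standalone sketch of the same bookkeeping, assuming only the third-party clickhouse-driver package and a reachable server rather than the harness's node1 helper:

    # Hypothetical helper, not the test harness: count finished queries that were
    # fanned out under a single initial_query_id (initiator plus per-shard queries).
    from clickhouse_driver import Client  # assumed installed

    def finished_query_count(client: Client, initial_query_id: str) -> int:
        client.execute("SYSTEM FLUSH LOGS")  # make sure query_log is flushed first
        rows = client.execute(
            "SELECT count() FROM system.query_log "
            "WHERE type = 'QueryFinish' AND initial_query_id = %(qid)s",
            {"qid": initial_query_id},
        )
        return int(rows[0][0])

    # Usage sketch: for a query fanned out over a 2-shard cluster one would expect 3.
    # print(finished_query_count(Client(host="localhost"), "some-query-id"))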
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker networks for project rootteststoragehdfs are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker containers for project rootteststoragehdfs are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker volumes for project rootteststoragehdfs are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Cleanup called (cluster.py:894, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker networks for project rootteststoragehdfs are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker containers for project rootteststoragehdfs are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Docker volumes for project rootteststoragehdfs are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker container list --all --filter name='^/rootteststoragehdfs-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check)
2025-04-04 18:20:45 [ 632 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : No running containers for project: rootteststoragehdfs (cluster.py:922, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check)
2025-04-04 18:20:45 [ 632 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check)
2025-04-04 18:20:45 [ 632 ] DEBUG : Images pruned (cluster.py:947, cleanup)
2025-04-04 18:20:45 [ 632 ] DEBUG : Trying to prune unused volumes...
(cluster.py:953, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-04 18:20:45 [ 632 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) 2025-04-04 18:20:45 [ 632 ] DEBUG : Setup directory for instance: node1 (cluster.py:2813, start) 2025-04-04 18:20:45 [ 632 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_storage_hdfs/configs/macro.xml', '/ClickHouse/tests/integration/test_storage_hdfs/configs/schema_cache.xml', '/ClickHouse/tests/integration/test_storage_hdfs/configs/cluster.xml'] to /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/configs/config.d (cluster.py:4752, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/database (cluster.py:4769, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/logs (cluster.py:4780, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-04 18:20:45 [ 632 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'HDFS_HOST': 'hdfs1', 'HDFS_NAME_PORT': '50070', 'HDFS_DATA_PORT': '50075', 'HDFS_LOGS': '/ClickHouse/tests/integration/test_storage_hdfs/_instances-2/hdfs/logs', 'HDFS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env (cluster.py:97, _create_env_file) 2025-04-04 18:20:45 [ 632 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-04 18:20:45 [ 632 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-04 18:20:45 [ 632 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-04 18:20:45 [ 632 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-04 18:20:45 [ 632 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-04-04 18:20:45 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --project-name rootteststoragehdfs --file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml pull] (cluster.py:122, run_and_check) 2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: node1 Pulling (cluster.py:148, run_and_check) 2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: hdfs1 Pulling (cluster.py:148, run_and_check) 
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: node1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: hdfs1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Setup HDFS (cluster.py:3064, start)
2025-04-04 18:20:56 [ 632 ] DEBUG : Command:[docker compose --project-name rootteststoragehdfs --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --verbose up -d] (cluster.py:122, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr:time="2025-04-04T18:20:56Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Network rootteststoragehdfs_default Creating (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Network rootteststoragehdfs_default Created (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Creating (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Created (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Starting (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Started (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr:time="2025-04-04T18:20:56Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : Stderr:time="2025-04-04T18:20:56Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:20:56 [ 632 ] DEBUG : get_instance_ip instance_name=hdfs1 (cluster.py:2082, get_instance_ip)
2025-04-04 18:20:56 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststoragehdfs-hdfs1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-04-04 18:20:56 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:20:56 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:20:56 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn)
2025-04-04 18:20:56 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn
    sock = connection.create_connection(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request
    conn.request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect
    self.sock = self._new_conn()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper
    response_data = func(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put
    return request("put", url, data=data, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

[Identical write_data attempts, "Can't connect to HDFS or preparations are not done yet" errors and connection-refused tracebacks repeat at 18:20:57, 18:20:58, 18:20:59, 18:21:00, 18:21:01, 18:21:03 and 18:21:04, where the captured log ends.]
"/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:21:05 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:05 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:05 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:05 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File 
"/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:21:06 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:06 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:06 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:06 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File 
"/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:21:07 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:07 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 
'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:07 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:07 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File 
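Up to this point the probe fails at the TCP layer because nothing is listening on 172.16.1.2:50070 yet, and wait_hdfs_to_start just retries about once per second. Below is a minimal sketch of that kind of readiness loop; the function name and the LISTSTATUS probe are illustrative assumptions, not the helper's actual code (which probes by writing a file, as the traceback shows).

import time
import requests

def wait_for_webhdfs(namenode_url, timeout=300, interval=1):
    # Hypothetical stand-in for the TCP-level part of wait_hdfs_to_start:
    # keep issuing a trivial WebHDFS request until the namenode accepts
    # connections and answers HTTP, or until the deadline expires.
    # Deeper readiness (startup mode, safe mode) still needs the checks
    # discussed further below.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            requests.get(f"{namenode_url}/webhdfs/v1/?op=LISTSTATUS", timeout=5)
            return True  # got any HTTP answer: the socket is up
        except requests.exceptions.ConnectionError:
            time.sleep(interval)  # connection refused: namenode still starting
    return False

# Example, with the address used in this run:
# wait_for_webhdfs("http://172.16.1.2:50070")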
"/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.1.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:21:08 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:08 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:08 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:09 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:09 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:08 GMT, Fri, 04 Apr 2025 18:21:08 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:08 GMT, Fri, 04 Apr 2025 18:21:08 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:09 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:10 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:10 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:10 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:10 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:10 GMT, Fri, 04 Apr 2025 18:21:10 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:10 GMT, Fri, 04 Apr 2025 18:21:10 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:10 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:11 [ 632 ] ERROR : Can't connect to HDFS or preparations 
are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true 2025-04-04 18:21:12 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:12 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:12 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:12 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:12 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:12 GMT, Fri, 04 Apr 2025 18:21:12 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:12 GMT, Fri, 04 Apr 2025 18:21:12 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:12 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:13 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:13 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:13 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:13 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:13 GMT, Fri, 04 Apr 2025 18:21:13 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:13 GMT, Fri, 04 Apr 2025 18:21:13 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:13 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, 
req_wrapper) 2025-04-04 18:21:14 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true 2025-04-04 18:21:15 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:15 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:15 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:15 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:15 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:15 GMT, Fri, 04 Apr 2025 18:21:15 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:15 GMT, Fri, 04 Apr 2025 18:21:15 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:15 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:16 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:16 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:16 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:16 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:16 GMT, Fri, 04 Apr 2025 18:21:16 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:16 GMT, Fri, 04 Apr 2025 18:21:16 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 
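From 18:21:08 the namenode socket is open, so the failures move up a layer: each PUT now gets HTTP 403 with a JSON RemoteException body ("Namenode is in startup mode"), which the helper treats as "preparations are not done yet" and retries. A rough sketch of how such a body can be classified as retriable is below; it is illustrative only, and the retriable set is an assumption based on the messages seen in this run, not the helper's actual logic.

import json

# Exception names that, in this run, accompany transient startup conditions
# ("Namenode is in startup mode", "Failed to find datanode, ...").
RETRIABLE = {"RetriableException", "IOException"}

def is_retriable_webhdfs_error(body: bytes) -> bool:
    """Return True if a 403 response body looks like a transient startup error."""
    try:
        exc = json.loads(body)["RemoteException"]
    except (ValueError, KeyError, TypeError):
        return False
    return exc.get("exception") in RETRIABLE

# Example with the body logged at 18:21:09:
body = b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}'
assert is_retriable_webhdfs_error(body)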
[... wait_hdfs_to_start logs the 403 HTTPError once more at 18:21:17; from 18:21:18 the namenode is out of startup mode but reports a different problem ...]
2025-04-04 18:21:18 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:21:18 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request)
2025-04-04 18:21:18 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to find datanode, suggest to check cluster health."}}' (hdfs_api.py:133, req_wrapper)
2025-04-04 18:21:18 [ 632 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper)
[... the same "Failed to find datanode" response comes back at 18:21:19, and the resulting 403 HTTPError is logged at 18:21:20 ...]
2025-04-04 18:21:21 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:21:21 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:21:21 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request)
2025-04-04 18:21:21 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:21 GMT, Fri, 04 Apr 2025 18:21:21 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:21 GMT, Fri, 04 Apr 2025 18:21:21 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data)
2025-04-04 18:21:21 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:21:21 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn)
2025-04-04 18:21:22 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request)
2025-04-04 18:21:22 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 28 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364) ..."}}' (hdfs_api.py:133, req_wrapper)
2025-04-04 18:21:22 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper)
[... the datanode PUT is retried at 18:21:23 and rejected with the same safe-mode RemoteException ("Safe mode will be turned off automatically in 27 seconds") ...]
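The 307 at 18:21:21 is the normal first half of a WebHDFS CREATE: the client PUTs to the namenode with redirects disabled, the namenode answers 307 Temporary Redirect with a Location header naming a datanode, and the client re-issues the PUT with the file contents to that datanode, expecting 201 Created. A standalone sketch of the same two-step exchange with requests follows; it is not the hdfs_api.py implementation, and the host and path are only taken from this run for illustration.

import requests

def webhdfs_create(namenode, path, data: bytes, user="root"):
    # Step 1: ask the namenode where to write; a ready namenode answers
    # 307 with a Location header, while a namenode in startup or safe mode
    # answers 403 (which raise_for_status turns into an HTTPError, as in
    # the log above).
    r1 = requests.put(
        f"http://{namenode}/webhdfs/v1{path}",
        params={"op": "CREATE", "overwrite": "true", "user.name": user},
        allow_redirects=False,
    )
    r1.raise_for_status()
    datanode_url = r1.headers["Location"]

    # Step 2: send the payload to the datanode URL; 201 means the file
    # was created.
    r2 = requests.put(datanode_url, data=data,
                      headers={"content-type": "text/plain"})
    r2.raise_for_status()
    return r2.status_code  # expected: 201

# The equivalent call for this run would be:
# webhdfs_create("172.16.1.2:50070", "/somefilewithrandomname222", b"1")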
2025-04-04 18:21:24 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root
[... the probe is repeated at 18:21:25 and 18:21:26: each attempt gets a 307 from the namenode, follows the Location header to the datanode, and is rejected with the same safe-mode RemoteException ("Safe mode will be turned off automatically in 25 seconds", then "... in 24 seconds"); wait_hdfs_to_start logs the 403 HTTPError again at 18:21:27 ...]
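The write now reaches a datanode, but the namenode is still in safe mode, and the RemoteException message announces when safe mode will end. A caller could use that hint to back off instead of retrying every second; the sketch below only parses the advertised countdown and is purely illustrative (the helper in this run simply keeps retrying about once a second).

import json
import re
import time

def safe_mode_wait_seconds(body: bytes) -> int | None:
    """Extract the advertised safe-mode countdown from a WebHDFS 403 body, if any."""
    try:
        message = json.loads(body)["RemoteException"]["message"]
    except (ValueError, KeyError, TypeError):
        return None
    m = re.search(r"Safe mode will be turned off automatically in (\d+) seconds", message)
    return int(m.group(1)) if m else None

# Shortened version of the body logged at 18:21:22:
body = b'{"RemoteException":{"exception":"RemoteException","message":"Name node is in safe mode. Safe mode will be turned off automatically in 28 seconds."}}'
wait = safe_mode_wait_seconds(body)  # -> 28
if wait is not None:
    time.sleep(min(wait, 30))  # back off for the advertised period (capped); the hint is only advisory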
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:28 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:28 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:28 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:28 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:28 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:28 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 22 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:28 GMT, Fri, 04 Apr 2025 18:21:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:28 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:29 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:29 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:29 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:29 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 21 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:29 GMT, Fri, 04 Apr 2025 18:21:29 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:29 GMT, Fri, 04 Apr 2025 18:21:29 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:29 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:30 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:31 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:31 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:31 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:31 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:31 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:31 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:31 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:31 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:31 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:31 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 19 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:31 GMT, Fri, 04 Apr 2025 18:21:31 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:31 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:32 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:32 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:32 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:32 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 18 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:32 GMT, Fri, 04 Apr 2025 18:21:32 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:32 GMT, Fri, 04 Apr 2025 18:21:32 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:32 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:33 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:34 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:34 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:34 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:34 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:34 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:34 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:34 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:34 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:34 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:34 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 16 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:34 GMT, Fri, 04 Apr 2025 18:21:34 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:34 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:35 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:35 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:35 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:35 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 15 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:35 GMT, Fri, 04 Apr 2025 18:21:35 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:35 GMT, Fri, 04 Apr 2025 18:21:35 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:35 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:36 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:37 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:37 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:37 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:37 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:37 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:37 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:37 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:37 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:37 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:37 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 13 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:37 GMT, Fri, 04 Apr 2025 18:21:37 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:37 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:38 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:38 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:38 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:38 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 12 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:38 GMT, Fri, 04 Apr 2025 18:21:38 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:38 GMT, Fri, 04 Apr 2025 18:21:38 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:38 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:39 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:40 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:40 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:40 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:40 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:40 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:40 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:40 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:40 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:40 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:40 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 10 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:40 GMT, Fri, 04 Apr 2025 18:21:40 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:40 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:41 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:41 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:41 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:41 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 9 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:41 GMT, Fri, 04 Apr 2025 18:21:41 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:41 GMT, Fri, 04 Apr 2025 18:21:41 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:41 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:42 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:43 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:43 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:43 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:43 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:43 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:43 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:43 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:43 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:43 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:43 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 6 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:43 GMT, Fri, 04 Apr 2025 18:21:43 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:43 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:44 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:44 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:44 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:44 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 5 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:44 GMT, Fri, 04 Apr 2025 18:21:44 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:44 GMT, Fri, 04 Apr 2025 18:21:44 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:44 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:45 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:47 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:47 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:47 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:47 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:47 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:47 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:47 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:47 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:47 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:47 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 3 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:47 GMT, Fri, 04 Apr 2025 18:21:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:47 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:48 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:48 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:48 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:48 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 2 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:48 GMT, Fri, 04 Apr 2025 18:21:48 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:48 GMT, Fri, 04 Apr 2025 18:21:48 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:48 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:49 [ 632 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:21:50 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:50 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:50 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:50 [ 632 ] DEBUG : 
http://172.16.1.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:50 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:50 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:50 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:50 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:50 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:50 [ 632 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 0 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:50 GMT, Fri, 04 Apr 2025 18:21:50 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:50 [ 632 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:21:51 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:51 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:51 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:51 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:51 GMT, Fri, 04 Apr 2025 18:21:51 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:51 GMT, Fri, 04 Apr 2025 18:21:51 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:51 [ 632 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:51 GMT, Fri, 04 Apr 2025 18:21:51 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:51 GMT, Fri, 04 Apr 2025 18:21:51 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:253, write_data) 2025-04-04 18:21:51 [ 632 ] DEBUG : Connected to HDFS and 
SafeMode disabled! (cluster.py:2566, wait_hdfs_to_start) 2025-04-04 18:21:51 [ 632 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --project-name rootteststoragehdfs --file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml up -d --no-recreate') (cluster.py:3200, start) 2025-04-04 18:21:51 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --project-name rootteststoragehdfs --file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml up -d --no-recreate] (cluster.py:122, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:21:52 [ 632 ] DEBUG : ClickHouse instance created (cluster.py:3208, start) 2025-04-04 18:21:52 [ 632 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2082, get_instance_ip) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststoragehdfs-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2092, get_instance_global_ipv6) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststoragehdfs-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : Waiting for ClickHouse start in node1, ip: 172.16.1.3... 
(cluster.py:3216, start) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststoragehdfs-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/3ff00a6739a5ba729464e48eba4d63034691b5f1e5bbdf9260e5a5e860027bda/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/3ff00a6739a5ba729464e48eba4d63034691b5f1e5bbdf9260e5a5e860027bda/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/3ff00a6739a5ba729464e48eba4d63034691b5f1e5bbdf9260e5a5e860027bda/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/3ff00a6739a5ba729464e48eba4d63034691b5f1e5bbdf9260e5a5e860027bda/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://localhost:None "GET /v1.46/containers/3ff00a6739a5ba729464e48eba4d63034691b5f1e5bbdf9260e5a5e860027bda/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : ClickHouse node1 started (cluster.py:3220, start) ------------------------------ Captured log call ------------------------------- 2025-04-04 18:21:52 [ 632 ] INFO : GETFILESTATUS /test_hdfsCluster user.name=root 172.16.1.2:50070 (__init__.py:412, _request) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50070 "GET /webhdfs/v1/test_hdfsCluster?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] INFO : MKDIRS /test_hdfsCluster user.name=root 172.16.1.2:50070 (__init__.py:412, _request) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/test_hdfsCluster?user.name=root&op=MKDIRS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /test_hdfsCluster/file1 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/test_hdfsCluster/file1?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/test_hdfsCluster/file1?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file1?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 
] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file1?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/test_hdfsCluster/file1?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1\n', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/test_hdfsCluster/file1', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/test_hdfsCluster/file1?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ftest_hdfsCluster%2Ffile1&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file1', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file1', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:253, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /test_hdfsCluster/file2 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/test_hdfsCluster/file2?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/test_hdfsCluster/file2?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file2?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 
04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file2?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/test_hdfsCluster/file2?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'2\n', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/test_hdfsCluster/file2', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/test_hdfsCluster/file2?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ftest_hdfsCluster%2Ffile2&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file2', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file2', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:253, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /test_hdfsCluster/file3 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50070/webhdfs/v1/test_hdfsCluster/file3?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.1.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50070 "PUT /webhdfs/v1/test_hdfsCluster/file3?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file3?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 
'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/test_hdfsCluster/file3?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : CALL: {'url': 'http://172.16.1.2:50075/webhdfs/v1/test_hdfsCluster/file3?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'3\n', 'headers': {'content-type': 'text/plain', 'host': '172.16.1.2'}, 'params': {'file': '/test_hdfsCluster/file3', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : Starting new HTTP connection (1): 172.16.1.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:21:52 [ 632 ] DEBUG : http://172.16.1.2:50075 "PUT /webhdfs/v1/test_hdfsCluster/file3?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ftest_hdfsCluster%2Ffile3&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:21:52 [ 632 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file3', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:21:52 [ 632 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Date': 'Fri, 04 Apr 2025 18:21:52 GMT, Fri, 04 Apr 2025 18:21:52 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/test_hdfsCluster/file3', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:253, write_data) 2025-04-04 18:21:52 [ 632 ] DEBUG : Executing query select id, _file as file_name, _path as file_path from hdfs('hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id on node1 (cluster.py:3677, query) 2025-04-04 18:21:52 [ 632 ] DEBUG : Executing query select id, _file as file_name, _path as file_path from hdfsCluster('test_cluster_two_shards', 'hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query select id, _file as file_name, _path as file_path from hdfs('hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query SYSTEM FLUSH LOGS on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query SELECT count() FROM system.query_log WHERE type='QueryFinish' AND initial_query_id='c5f3aeee-8bab-4c2e-b660-a66beb3430be' on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query SELECT count() FROM system.query_log WHERE type='QueryFinish' AND initial_query_id='a4e5f2b6-a1c9-479c-9db3-b43db243db18' on node1 (cluster.py:3677, query) ____________________________ test_virtual_columns_2 ____________________________ started_cluster = def test_virtual_columns_2(started_cluster): hdfs_api = started_cluster.hdfs_api fs = HdfsClient(hosts=started_cluster.hdfs_ip) table_function = ( f"hdfs('hdfs://hdfs1:9000/parquet_2', 'Parquet', 'a Int32, b String')" ) node1.query(f"insert into table function {table_function} SELECT 
1, 'kek'") result = node1.query(f"SELECT _path FROM {table_function}") assert result.strip() == "parquet_2" table_function = ( f"hdfs('hdfs://hdfs1:9000/parquet_3', 'Parquet', 'a Int32, _path String')" ) > node1.query(f"insert into table function {table_function} SELECT 1, 'kek'") test_storage_hdfs/test.py:793: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 20, stderr: Received exception from server (version 24.12.2): E Code: 20. DB::Exception: Received from 172.16.1.3:9000. DB::Exception: Number of columns doesn't match (source: 2 and result: 1). Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000d78f8db E 1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000867b58c E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, unsigned long&, unsigned long&) @ 0x000000000900aacb E 3. DB::ActionsDAG::makeConvertingActions(std::vector> const&, std::vector> const&, DB::ActionsDAG::MatchColumnsMode, bool, bool, std::unordered_map, std::equal_to, std::allocator>>*) @ 0x000000001173411e E 4. DB::InterpreterInsertQuery::buildInsertSelectPipeline(DB::ASTInsertQuery&, std::shared_ptr) @ 0x0000000012053465 E 5. DB::InterpreterInsertQuery::execute() @ 0x0000000012055f76 E 6. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001242beb5 E 7. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000012426cbd E 8. DB::TCPHandler::runImpl() @ 0x000000001370921e E 9. DB::TCPHandler::run() @ 0x0000000013724498 E 10. Poco::Net::TCPServerConnection::start() @ 0x0000000016639f07 E 11. Poco::Net::TCPServerDispatcher::run() @ 0x000000001663a399 E 12. Poco::PooledThread::run() @ 0x0000000016606cbc E 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001660525d E 14. ? @ 0x00007f78f8aacac3 E 15. ? @ 0x00007f78f8b3e850 E . 
(NUMBER_OF_COLUMNS_DOESNT_MATCH) E (query: insert into table function hdfs('hdfs://hdfs1:9000/parquet_3', 'Parquet', 'a Int32, _path String') SELECT 1, 'kek') helpers/client.py:248: QueryRuntimeException ------------------------------ Captured log call ------------------------------- 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query insert into table function hdfs('hdfs://hdfs1:9000/parquet_2', 'Parquet', 'a Int32, b String') SELECT 1, 'kek' on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query SELECT _path FROM hdfs('hdfs://hdfs1:9000/parquet_2', 'Parquet', 'a Int32, b String') on node1 (cluster.py:3677, query) 2025-04-04 18:21:53 [ 632 ] DEBUG : Executing query insert into table function hdfs('hdfs://hdfs1:9000/parquet_3', 'Parquet', 'a Int32, _path String') SELECT 1, 'kek' on node1 (cluster.py:3677, query) ---------------------------- Captured log teardown ----------------------------- 2025-04-04 18:21:54 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --project-name rootteststoragehdfs --file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml stop --timeout 20] (cluster.py:122, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/.env --project-name rootteststoragehdfs --file /ClickHouse/tests/integration/test_storage_hdfs/_instances-2/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml down --volumes] (cluster.py:122, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Stopping (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Removing (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Stopped (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Removing (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-node1-1 Removed (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Container rootteststoragehdfs-hdfs1-1 Removed 
(cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Network rootteststoragehdfs_default Removing (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Stderr: Network rootteststoragehdfs_default Removed (cluster.py:148, run_and_check) 2025-04-04 18:22:14 [ 632 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-04 18:22:14 [ 632 ] DEBUG : Docker networks for project rootteststoragehdfs are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-04 18:22:15 [ 632 ] DEBUG : Docker containers for project rootteststoragehdfs are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-04 18:22:15 [ 632 ] DEBUG : Docker volumes for project rootteststoragehdfs are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-04 18:22:15 [ 632 ] DEBUG : Command:[docker container list --all --filter name='^/rootteststoragehdfs-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-04 18:22:15 [ 632 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : No running containers for project: rootteststoragehdfs (cluster.py:922, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-04 18:22:15 [ 632 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-04 18:22:15 [ 632 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : Trying to prune unused volumes... (cluster.py:953, cleanup) 2025-04-04 18:22:15 [ 632 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-04 18:22:15 [ 632 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-04 18:22:15 [ 632 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) ============================== slowest durations =============================== 67.23s setup test_storage_hdfs/test.py::test_hdfsCluster 21.16s teardown test_storage_hdfs/test.py::test_virtual_columns_2 11.63s setup test_settings_randomization/test.py::test_settings_randomization 3.74s teardown test_settings_randomization/test.py::test_settings_randomization 0.98s call test_storage_hdfs/test.py::test_hdfsCluster 0.30s call test_storage_hdfs/test.py::test_virtual_columns_2 0.23s call test_settings_randomization/test.py::test_settings_randomization 0.00s teardown test_storage_hdfs/test.py::test_hdfsCluster 0.00s setup test_storage_hdfs/test.py::test_virtual_columns_2 =========================== short test summary info ============================ FAILED test_settings_randomization/test.py::test_settings_randomization - Ass... FAILED test_storage_hdfs/test.py::test_hdfsCluster - AssertionError: assert 1... FAILED test_storage_hdfs/test.py::test_virtual_columns_2 - helpers.client.Que... ======================== 3 failed in 105.54s (0:01:45) ========================= Traceback (most recent call last): File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 528, in subprocess.check_call(cmd, shell=True) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_jwabdd --privileged --dns-search='.' 
--memory=30709035008 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=2 --color=no --durations=0 test_settings_randomization/test.py::test_settings_randomization test_storage_hdfs/test.py::test_hdfsCluster test_storage_hdfs/test.py::test_virtual_columns_2 -vvv" altinityinfra/integration-tests-runner:cd6390247eca ' returned non-zero exit status 1.
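
The setup portion of the log shows WebHDFS's two-step file creation: a PUT against the NameNode on port 50070 answers 307 with a Location header, and the payload is then PUT to the DataNode on port 50075, where 201 means the file was written and 403 means the write was rejected. A minimal sketch of that exchange with requests, using the addresses from this run but hypothetical function and parameter names rather than the real hdfs_api.py helper:

import requests

def webhdfs_create(namenode_host, path, data, user="root"):
    # Step 1: ask the NameNode where to write. With redirects disabled it
    # replies 307 and a Location header pointing at a DataNode (port 50075 above).
    first = requests.put(
        f"http://{namenode_host}:50070/webhdfs/v1{path}",
        params={"op": "CREATE", "overwrite": "true", "user.name": user},
        allow_redirects=False,
        timeout=10,
    )
    first.raise_for_status()
    datanode_url = first.headers["Location"]

    # Step 2: send the file contents to the DataNode. 201 Created is the
    # success case; 403 with a RemoteException body is what the log shows
    # while the NameNode is still in safe mode.
    second = requests.put(
        datanode_url,
        data=data,
        headers={"content-type": "text/plain"},
        allow_redirects=False,
        timeout=10,
    )
    second.raise_for_status()
    return second.status_code

# Example (hypothetical): webhdfs_create("172.16.1.2", "/somefilewithrandomname222", b"1")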
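
wait_hdfs_to_start keeps repeating that probe write while the NameNode reports "Name node is in safe mode ... In safe mode extension", and the write finally succeeds once the countdown reaches zero (the 201 at 18:21:51). A retry loop in the same spirit, with made-up names and limits rather than the actual cluster.py values:

import time
import requests

def retry_until_hdfs_writable(write_probe, attempts=60, delay=1.0):
    # write_probe is any callable that performs a small WebHDFS write and
    # raises requests.exceptions.HTTPError on a non-2xx answer, such as the
    # 403 "safe mode" responses above. Retry until it succeeds or we give up.
    last_error = None
    for _ in range(attempts):
        try:
            write_probe()
            return
        except requests.exceptions.HTTPError as e:
            last_error = e
            time.sleep(delay)
    raise TimeoutError(f"HDFS still not writable: {last_error}")

# Example (hypothetical):
# retry_until_hdfs_writable(lambda: webhdfs_create("172.16.1.2", "/probe", b"1"))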
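
test_hdfsCluster runs the same query through hdfs() and hdfsCluster() over the three files written above and then, after SYSTEM FLUSH LOGS, counts QueryFinish rows in system.query_log for a given initial_query_id to see how the cluster function fanned out. A condensed sketch of that pattern, assuming a node.query helper like the one in the log and assuming it accepts a query_id argument:

import uuid

def finished_query_count(node, initial_query_id):
    # Flush the in-memory log buffers first, otherwise the rows may not be
    # visible yet, then count the finished (sub)queries sharing that id.
    node.query("SYSTEM FLUSH LOGS")
    return int(
        node.query(
            "SELECT count() FROM system.query_log "
            f"WHERE type='QueryFinish' AND initial_query_id='{initial_query_id}'"
        ).strip()
    )

# Example (hypothetical usage):
# qid = str(uuid.uuid4())
# node1.query(
#     "select id, _file as file_name, _path as file_path "
#     "from hdfsCluster('test_cluster_two_shards', "
#     "'hdfs://hdfs1:9000/test_hdfsCluster/file*', 'TSV', 'id UInt32') order by id",
#     query_id=qid,
# )
# finished_query_count(node1, qid)  # the expected count depends on the cluster layout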
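
The test_virtual_columns_2 failure is a column-count mismatch: the server builds an insert target with a single physical column while the SELECT supplies two values, presumably because '_path' is also the name of the hdfs() virtual column and so is not treated as an ordinary writable column. The sketch below contrasts the failing query from the traceback with a hypothetical variant that avoids the name clash (the earlier parquet_2 insert with 'b String' did succeed); whether the '_path' case should be accepted or rejected is not something the log settles.

failing_insert = (
    "insert into table function "
    "hdfs('hdfs://hdfs1:9000/parquet_3', 'Parquet', 'a Int32, _path String') "
    "SELECT 1, 'kek'"
)  # raises Code: 20 NUMBER_OF_COLUMNS_DOESNT_MATCH, as captured above

renamed_insert = (
    "insert into table function "
    "hdfs('hdfs://hdfs1:9000/parquet_4', 'Parquet', 'a Int32, b String') "
    "SELECT 1, 'kek'"
)  # hypothetical variant: both columns are ordinary, so source and target widths match

# node1.query(failing_insert)
# node1.query(renamed_insert)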